Results 1 - 18 of 18
1.
2022 8th International Engineering Conference on Sustainable Technology and Development (IEC) ; : 12-16, 2022.
Article in English | Web of Science | ID: covidwho-2309721

ABSTRACT

Load balancing techniques are essential for efficient networking systems, and in teleconferencing systems it is not easy to balance loads while maintaining good performance. This study proposes a network-based approach for load balancing in teleconferencing systems, applying concepts from graph theory to model and simulate them. In the proposed approach, each computer in the network is treated as a vertex, and an edge is created between two vertices whenever they can reach each other; the weight of the edge specifies the cost of access between the two computers. Tasks are transferred along the shortest path between the two nodes while taking the tasks' deadlines into account. Measured by the number of missed-deadline tasks, the proposed approach proved effective in comparison with other approaches. The method helps guarantee smooth conferencing among users, which benefits both teleconferencing and e-learning, and makes it useful for maintaining smooth conferencing during future lockdown situations (e.g., a COVID situation).
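
A minimal sketch of how such a graph-based transfer rule might look, assuming a weighted adjacency list and Dijkstra's algorithm; the node names, edge weights, and deadline check below are illustrative and not taken from the paper:

```python
import heapq

def shortest_path_cost(graph, src, dst):
    """Dijkstra over a weighted adjacency list: {node: [(neighbor, cost), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")

def can_transfer(graph, src, dst, deadline, cost_per_unit_time=1.0):
    """Accept a transfer only if the cheapest route finishes before the task deadline."""
    cost = shortest_path_cost(graph, src, dst)
    return cost / cost_per_unit_time <= deadline

# Example: three machines, edge weights are access costs (hypothetical values).
network = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0)], "C": []}
print(can_transfer(network, "A", "C", deadline=4.0))  # True: A->B->C costs 3.0
```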

2.
Omega ; : 102801, 2022 Nov 16.
Article in English | MEDLINE | ID: covidwho-2240280

ABSTRACT

This paper introduces mathematical models that support dynamic fair balancing of COVID-19 patients over hospitals in a region and across regions. Patient flow is captured in an infinite server queueing network. The dynamic fair balancing model within a region is a load balancing model incorporating a forecast of the bed occupancy, while across regions, it is a stochastic program taking into account scenarios of the future bed surpluses or shortages. Our dynamic fair balancing models yield decision rules for patient allocation to hospitals within the region and reallocation across regions based on safety levels and forecast bed surplus or bed shortage for each hospital or region. Input for the model is an accurate real-time forecast of the number of COVID-19 patients hospitalised in the ward and the Intensive Care Unit (ICU) of the hospitals based on the predicted inflow of patients, their Length of Stay and patient transfer probabilities among ward and ICU. The required data is obtained from the hospitals' data warehouses and regional infection data as recorded in the Netherlands. The algorithm is evaluated in Dutch regions for allocation of COVID-19 patients to hospitals within the region and reallocation across regions using data from the second COVID-19 peak.
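
A minimal sketch, not the authors' actual model, of a within-region allocation rule of the kind described: each hospital carries a safety level and a forecast occupancy, and an incoming patient is routed to the hospital with the largest forecast bed surplus. All names and numbers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Hospital:
    name: str
    capacity: int              # staffed COVID-19 beds (ward or ICU)
    forecast_occupancy: float  # forecast number of occupied beds
    safety_level: int          # beds held back as a buffer

    @property
    def forecast_surplus(self) -> float:
        return self.capacity - self.safety_level - self.forecast_occupancy

def allocate_patient(hospitals):
    """Send the next admission to the hospital with the largest forecast surplus.
    Returns None if every hospital is at or below its safety level, signalling
    that reallocation across regions should be considered."""
    best = max(hospitals, key=lambda h: h.forecast_surplus)
    return best if best.forecast_surplus > 0 else None

region = [
    Hospital("H1", capacity=40, forecast_occupancy=35.2, safety_level=2),
    Hospital("H2", capacity=25, forecast_occupancy=18.0, safety_level=2),
]
chosen = allocate_patient(region)
print(chosen.name if chosen else "escalate to inter-regional reallocation")  # H2
```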

3.
Indonesian Journal of Electrical Engineering and Computer Science ; 30(1):388-393, 2023.
Article in English | Scopus | ID: covidwho-2233109

ABSTRACT

The increasing number of publications on cloud computing shows that much research and development has been done, especially on task scheduling. Organizations are eager to adopt customized technology that delivers cloud services to their users as smoothly as possible. Because the COVID-19 pandemic pushed nearly everyone to operate digitally, the workload on machines increased, and organizations try to balance this workload by running cloud services on the appropriate resources. Nevertheless, several issues remain open, and many researchers continue to develop new techniques to address them. This paper presents a new comparison of load balancing performance using the quality of service (QoS) and virtual machine tree (VMT) scheduling algorithms, evaluated with the CloudSim toolkit. The VMT algorithm uses a tree-structured graph to schedule tasks with an appropriate distribution across machines, while the QoS algorithm schedules tasks according to the service quality requested by the user. © 2023 Institute of Advanced Engineering and Science. All rights reserved.
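
A rough sketch of the two scheduling ideas as they are commonly described, not the exact algorithms evaluated in the paper: tasks are spread over a tree of VMs in level order (VMT-style), and tasks are matched to VMs by a requested service level (QoS-style). All structures and values are illustrative.

```python
from collections import deque

def vmt_assign(vm_tree_root, tasks):
    """Distribute tasks over a VM tree in breadth-first (level) order,
    so machines at each level receive a similar share of the load."""
    order = []
    queue = deque([vm_tree_root])
    while queue:
        vm = queue.popleft()
        order.append(vm["id"])
        queue.extend(vm.get("children", []))
    return {task: order[i % len(order)] for i, task in enumerate(tasks)}

def qos_assign(vms, tasks):
    """Match each task to the cheapest VM whose capability meets the requested QoS."""
    plan = {}
    for task, required_qos in tasks.items():
        candidates = [vm for vm in vms if vm["qos"] >= required_qos]
        plan[task] = min(candidates, key=lambda vm: vm["cost"])["id"] if candidates else None
    return plan

tree = {"id": "vm0", "children": [{"id": "vm1"}, {"id": "vm2"}]}
print(vmt_assign(tree, ["t1", "t2", "t3", "t4"]))
print(qos_assign([{"id": "vm1", "qos": 2, "cost": 1.0}, {"id": "vm2", "qos": 5, "cost": 3.0}],
                 {"t1": 1, "t2": 4}))
```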

4.
J Comput Chem ; 44(12): 1174-1188, 2023 May 05.
Article in English | MEDLINE | ID: covidwho-2232813

ABSTRACT

Easy and effective usage of computational resources is crucial for scientific calculations, from the perspectives of both timeliness and economic efficiency. This work proposes a bi-level optimization framework for optimizing computational sequences, into which machine-learning (ML) assisted static load balancing and different dynamic load-balancing algorithms can be integrated. On this basis, the computational and scheduling engine ParaEngine is developed to invoke optimized quantum chemical (QC) calculations. Benchmark calculations include a high-throughput drug suite, a solvent model, the P38 protein, and SARS-CoV-2 systems. The results show that the usage rate of given computational resources for high-throughput and large-scale fragmentation QC calculations improves substantially, and faster completion of computational tasks can be expected when employing high-performance computing (HPC) clusters.
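
ParaEngine itself is not described in detail in this listing; the following is only a generic sketch of the kind of static load balancing such an engine could integrate, in which fragment jobs with predicted costs (e.g., from an ML cost model) are greedily packed onto compute nodes, longest first. Fragment names and runtimes are invented.

```python
import heapq

def static_balance(fragment_costs, n_nodes):
    """Longest-processing-time-first packing of predicted fragment costs onto nodes.
    fragment_costs: {fragment_id: predicted_seconds}, e.g. from an ML cost model."""
    loads = [(0.0, node) for node in range(n_nodes)]  # min-heap of (load, node)
    heapq.heapify(loads)
    assignment = {}
    for frag, cost in sorted(fragment_costs.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(loads)   # least-loaded node gets the next-largest job
        assignment[frag] = node
        heapq.heappush(loads, (load + cost, node))
    return assignment

# Hypothetical predicted runtimes (seconds) for four QC fragment jobs on 2 nodes.
print(static_balance({"fragA": 120.0, "fragB": 45.0, "fragC": 90.0, "fragD": 30.0}, 2))
```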

5.
Health Secur ; 21(1): 4-10, 2023.
Article in English | MEDLINE | ID: covidwho-2188075

ABSTRACT

To meet surge capacity and to prevent hospitals from being overwhelmed with COVID-19 patients, a regional crisis task force was established during the first pandemic wave to coordinate the even distribution of COVID-19 patients in the Amsterdam region. Based on a preexisting regional management framework for acute care, this task force was led by physicians experienced in managing mass casualty incidents. A collaborative framework consisting of the regional task force, the national task force, and the region's hospital crisis coordinators facilitated intraregional and interregional patient transfers. After hospital admission rates declined following the first COVID-19 wave, a window of opportunity enabled the task forces to create, standardize, and optimize their patient transfer processes before a potential second wave commenced. Improvement was prioritized according to 3 crucial pillars: process standardization, implementation of new strategies, and continuous evaluation of the decision tree. Implementing the novel "fair share" model as a straightforward patient distribution directive supported the regional task force's decisionmaking. Standardization of the digital patient transfer registration process contributed to a uniform, structured system in which every patient transfer was verifiable on intraregional and interregional levels. Furthermore, the regional task force team was optimized and evaluation meetings were standardized. Lines of communication were enhanced, resulting in increased situational awareness among all stakeholders that indirectly provided a safety net and an improved integral framework for managing COVID-19 care capacities. In this article, we describe enhancements to a patient transfer framework that can serve as an exemplary system to meet surge capacity demands during current and future pandemics.
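
The article does not spell out the "fair share" formula; a plausible minimal reading, shown here purely as an illustration, is that each hospital's target COVID-19 load is proportional to its share of regional bed capacity, with transfers suggested from over-target to under-target hospitals. Hospital names and counts are made up.

```python
def fair_share_targets(beds, regional_patients):
    """Split the regional COVID-19 load proportionally to each hospital's bed capacity.
    beds: {hospital: staffed_beds}. Returns {hospital: target_patient_count}."""
    total_beds = sum(beds.values())
    return {h: regional_patients * b / total_beds for h, b in beds.items()}

def transfer_balance(current, targets):
    """Positive values mark hospitals above target (donors), negative values mark
    hospitals with spare fair-share capacity (receivers)."""
    return {h: round(current[h] - targets[h]) for h in current}

beds = {"A": 400, "B": 250, "C": 350}      # hypothetical bed capacities
current = {"A": 60, "B": 20, "C": 20}      # current COVID-19 admissions
targets = fair_share_targets(beds, sum(current.values()))
print(targets)                   # {'A': 40.0, 'B': 25.0, 'C': 35.0}
print(transfer_balance(current, targets))  # A should transfer out, B and C can receive
```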


Subject(s)
COVID-19 , Mass Casualty Incidents , Humans , Surge Capacity , Critical Care
6.
Health Secur ; 2022 Nov 18.
Article in English | MEDLINE | ID: covidwho-2134708

ABSTRACT

Within weeks of New York State's first confirmed case of COVID-19, New York City became the epicenter of the nation's COVID-19 pandemic. With more than 80,000 COVID-19 hospitalizations during the first wave alone, hospitals in downstate New York were forced to adapt existing procedures to manage the surge and care for patients facing a novel disease. Given the unprecedented surge, effective patient load balancing-moving patients from a hospital with diminishing capacity to another hospital within the same health system with relatively greater capacity-became chief among the capabilities required of New York health systems. The Greater New York Hospital Association invited members of downstate New York's 6 largest health systems to talk about how each of their systems evolved their patient load balancing procedures throughout the pandemic. Informed by their insights, experiences, lessons learned, and collaboration, we collectively present a set of consensus recommendations and best practices for patient load balancing at the facility and health system level, which may inform regional approaches to patient load balancing.

7.
Health Secur ; 20(S1): S71-S84, 2022 Jun.
Article in English | MEDLINE | ID: covidwho-2097250

ABSTRACT

In fall 2020, COVID-19 infections accelerated across the United States. For many states, a surge in COVID-19 cases meant planning for the allocation of scarce resources. Crisis standards of care planning focuses on maintaining high-quality clinical care amid extreme operating conditions. One of the primary goals of crisis standards of care planning is to use all preventive measures available to avoid reaching crisis conditions and the complex triage decisionmaking involved therein. Strategies to stay out of crisis must respond to the actual experience of people on the frontlines, or the "ground truth," to ensure efforts to increase critical care bed numbers and augment staff, equipment, supplies, and medications to provide an effective response to a public health emergency. Successful management of a surge event where healthcare needs exceed capacity requires coordinated strategies for scarce resource allocation. In this article, we examine the ground truth challenges encountered in response efforts during the fall surge of 2020 for 2 states-Nebraska and California-and the strategies each state used to enable healthcare facilities to stay out of crisis standards of care. Through these 2 cases, we identify key tools deployed to reduce surge and barriers to coordinated statewide support of the healthcare infrastructure. Finally, we offer considerations for operationalizing key tools to alleviate surge and recommendations for stronger statewide coordination in future public health emergencies.


Subject(s)
COVID-19 , Disaster Planning , COVID-19/prevention & control , Critical Care , Delivery of Health Care , Humans , Resource Allocation , Surge Capacity , Triage , United States
8.
1st International Conference on Artificial Intelligence Trends and Pattern Recognition, ICAITPR 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2018782

ABSTRACT

Today, cloud computing is a distributed system environment whose services are offered on a pay-as-you-go model: cloud users pay according to the services they consume. The services available to cloud users include infrastructure as a service, platform as a service, software as a service, and security as a service. Most users are now migrating to cloud platforms, and during the COVID-19 pandemic most large and small organizations operated their business on them. At the same time, industrial automation led companies to move their operations to cloud environments. This rapid migration increased the demand for cloud computing, and while rising demand benefits service providers, resource allocation remains a challenging issue: a good resource allocation strategy provides quick service to cloud users at minimum cost to cloud providers. In this paper, we discuss the resource allocation procedure and the throttled load balancing algorithm, and compare the results with other resource optimization techniques. © 2022 IEEE.
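
The throttled algorithm is standard in the CloudSim literature; below is a minimal sketch, not the paper's own implementation: an index table tracks whether each VM is available, a request is dispatched to the first available VM, and requests are queued or rejected when every VM is busy.

```python
class ThrottledLoadBalancer:
    """Classic throttled policy: at most one active request per VM at a time."""

    def __init__(self, vm_ids):
        self.available = {vm: True for vm in vm_ids}  # the VM index table

    def allocate(self, request_id):
        for vm, is_free in self.available.items():
            if is_free:
                self.available[vm] = False
                return vm          # VM found, request is dispatched
        return None                # all VMs busy: caller queues or rejects the request

    def release(self, vm):
        self.available[vm] = True  # called when the VM finishes its request

lb = ThrottledLoadBalancer(["vm0", "vm1"])
print(lb.allocate("r1"), lb.allocate("r2"), lb.allocate("r3"))  # vm0 vm1 None
lb.release("vm0")
print(lb.allocate("r4"))  # vm0
```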

9.
8th International Engineering Conference on Sustainable Technology and Development: Towards Engineering Innovations and Sustainability, IEC 2022 ; : 12-16, 2022.
Article in English | Scopus | ID: covidwho-1985477

ABSTRACT

Load balancing techniques are essential for efficient networking systems, and in teleconferencing systems it is not easy to balance loads while maintaining good performance. This study proposes a network-based approach for load balancing in teleconferencing systems, applying concepts from graph theory to model and simulate them. In the proposed approach, each computer in the network is treated as a vertex, and an edge is created between two vertices whenever they can reach each other; the weight of the edge specifies the cost of access between the two computers. Tasks are transferred along the shortest path between the two nodes while taking the tasks' deadlines into account. Measured by the number of missed-deadline tasks, the proposed approach proved effective in comparison with other approaches. The method helps guarantee smooth conferencing among users, which benefits both teleconferencing and e-learning, and makes it useful for maintaining smooth conferencing during future lockdown situations (e.g., a COVID situation). © 2022 IEEE.

10.
31st Annual Conference of the European Association for Education in Electrical and Information Engineering, EAEEIE 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1973463

ABSTRACT

Due to the SARS-CoV-2 pandemic, educational institutions were immediately faced with a new challenge, forcing the transition from face-to-face teaching to distance learning in a short period. Distance education supported by technology challenges institutions on both sides of the technology/teaching pairing. This paper presents a proposal for an e-learning technology structure, supported by a cluster of servers capable of meeting the requirements of distance learning under the premises of high availability, high performance, and load balancing. The study began with a literature review of existing technologies and of ways to combine them into a system able to provide the necessary functionality and host all the users of an institution simultaneously. The implemented system results from this combination of technologies and allows its capacity to be scaled at any moment according to current needs. In technological terms, the solution was based on a free Linux distribution, Ubuntu Server, installed in a cluster of servers running VMware ESXi, together with a cluster of database nodes based on Galera technology. The e-learning platform used in this study was Moodle, because it is one of the resources most used by institutions. The aspects of teaching, provision of content, and execution of evaluation tests were explored. With the implementation of the presented scenario, it was possible to guarantee high availability and load balancing of the platform, together with high performance of the whole solution. © 2022 IEEE.

11.
Croatian Operational Research Review ; 13(1):99-111, 2022.
Article in English | ProQuest Central | ID: covidwho-1955178

ABSTRACT

Goods from warehouses must be scheduled in advance, prepared, routed, and delivered to shops. At least three systems directly interact within such a process: warehouse workforce scheduling, delivery scheduling, and the routing system. Ideally, the whole problem, together with the preceding inventory management (restocking), would be solved in one optimization pass. To make the problem simpler, we first decompose it by isolating the delivery scheduling, and then connect the optimization model to the rest of the system through a workload-balancing goal that serves as a surrogate for coordination and a criterion for system robustness. This paper presents a practical application of top-down discrete optimization that streamlines operations and enables better reactivity to changes in circumstances. We search for repetitive weekly delivery patterns that balance daily warehouse and transportation utilization in the absence of capacity constraints. Delivery patterns are optimized for quality criteria specific to each store-warehouse pair type, with a special focus on fresh food delivery, which aims to reduce inventory write-offs due to aging. The previous setup relied on semi-manual scheduling based on templates, historical prototypes, and domain knowledge. We found that augmenting the system with the new automated delivery scheduling brings a 3% improvement in the performance measure as well as faster adjustment to changes, as was the case with policy changes during COVID-19 lockdowns.
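
As a hedged illustration of the workload-balancing idea (not the authors' model), the sketch below greedily picks, for each store, the weekly delivery pattern that keeps the running daily warehouse load as flat as possible; the stores, candidate patterns, and volumes are made up.

```python
from statistics import pvariance

def assign_patterns(store_volumes, candidate_patterns):
    """Greedily choose a weekly pattern (tuple of delivery days 0..6) per store so the
    variance of total daily volume stays low.  store_volumes: {store: weekly_volume}."""
    daily_load = [0.0] * 7
    chosen = {}
    for store, volume in sorted(store_volumes.items(), key=lambda kv: -kv[1]):
        best_pattern, best_var = None, float("inf")
        for pattern in candidate_patterns:
            trial = daily_load[:]
            for day in pattern:
                trial[day] += volume / len(pattern)   # volume split over delivery days
            var = pvariance(trial)
            if var < best_var:
                best_pattern, best_var = pattern, var
        chosen[store] = best_pattern
        for day in best_pattern:
            daily_load[day] += volume / len(best_pattern)
    return chosen, daily_load

patterns = [(0, 3), (1, 4), (2, 5)]              # e.g. Mon/Thu, Tue/Fri, Wed/Sat
stores = {"S1": 100.0, "S2": 80.0, "S3": 60.0}   # weekly delivery volumes
print(assign_patterns(stores, patterns))
```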

12.
Applied Sciences ; 12(13):6474, 2022.
Article in English | ProQuest Central | ID: covidwho-1933959

ABSTRACT

Natural disasters have a significant impact on human welfare. In recent years, disasters are more violent and frequent due to climate change, so their impact may be higher if no preemptive measures are taken. In this context, real-time data processing and analysis have shown great potential to support decision-making, rescue, and recovery after a disaster. However, disaster scenarios are challenging due to their highly dynamic nature. In particular, we focus on data traffic and available processing resources. In this work, we propose SLedge—an edge-based processing model that enables mobile devices to support stream processing systems’ tasks under post-disaster scenarios. SLedge relies on a two-level control loop that automatically schedules SPS’s tasks over mobile devices to increase the system’s resilience, reduce latency, and provide accurate outputs. Our results show that SLedge can outperform a cloud-based infrastructure in terms of latency while keeping a low overhead. SLedge processes data up to five times faster than a cloud-based architecture while improving load balancing among processing resources, dealing better with traffic spikes, and reducing data loss and battery drain.
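
SLedge's actual two-level control loop is not reproduced here; as a rough sketch of the scheduling idea only, the snippet below assigns stream-processing tasks to the least-loaded mobile device that still has enough battery, falling back to the cloud otherwise. Device names and thresholds are invented.

```python
def schedule_tasks(tasks, devices, min_battery=0.2):
    """tasks: {task: cpu_demand}; devices: {device: {"load": x, "battery": y}}.
    Returns {task: device_or_'cloud'} using a least-loaded, battery-aware rule."""
    placement = {}
    for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        eligible = [d for d, s in devices.items() if s["battery"] >= min_battery]
        if not eligible:
            placement[task] = "cloud"       # fall back to the cloud tier
            continue
        target = min(eligible, key=lambda d: devices[d]["load"])
        devices[target]["load"] += demand
        placement[task] = target
    return placement

devices = {"phone1": {"load": 0.1, "battery": 0.8},
           "phone2": {"load": 0.3, "battery": 0.1}}   # phone2 too low on battery
print(schedule_tasks({"filter": 0.2, "aggregate": 0.4}, devices))
```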

13.
Artif Intell Rev ; 55(3): 2529-2573, 2022.
Article in English | MEDLINE | ID: covidwho-1888909

ABSTRACT

Cloud computing is a technology that has considerably changed human life in many respects over the last decade, and especially after the COVID-19 pandemic almost all daily activity shifted to a cloud basis. Cloud computing is a utility in which different hardware and software resources are accessed on a pay-per-use basis. Most of these resources are available in virtualized form, and the virtual machine (VM) is one of the main elements of virtualization. VMs are used in data centers to distribute resources and applications according to customer demand. Cloud data centers face various issues regarding performance and efficiency, and different approaches are used to address them. Virtual machines play an important role in improving data center performance, so different approaches are used to improve VM efficiency, i.e., load balancing of resources and tasks. To this end, various VM parameters are improved, such as makespan, quality of service, energy, data accuracy, and network utilization; improving these parameters directly improves the performance of cloud computing. We therefore conducted this review to discuss the improvements made to VMs from 2015 to 2021. The paper also covers various parameters of cloud computing, and the final section presents the role of machine learning algorithms in VMs and in load balancing approaches, along with future directions for VMs in cloud data centers.

14.
16th Conference on Information Systems Management, ISM 2021 and Information Systems and Technologies conference track, FedCSIS-IST 2021 Held as Part of 16th Conference on Computer Science and Information Systems, FedCSIS 2021 ; 442 LNBIP:97-116, 2022.
Article in English | Scopus | ID: covidwho-1797702

ABSTRACT

The emergence of the COVID pandemic called for mass vaccination campaigns worldwide. Pharmaceutical companies struggle to ramp up their production to meet the demand for vaccines but cannot always guarantee a perfectly regular delivery schedule. On the other hand, governments must devise plans to have most of their population vaccinated in the shortest possible time and have the vaccine booster administered after a precise time interval. The combination of delivery uncertainties and those time requirements may make such planning difficult. In this paper, we propose several heuristic strategies to meet those requirements in the face of delivery uncertainties. The outcome of those strategies is a daily vaccination plan that suggests how many initial doses and boosters can be administered each day. We compare the results with the optimal plan obtained through linear programming, which however assumes that the whole delivery schedule is known in advance. As performance metrics, we consider both the vaccination time (which should be as low as possible) and the balance of vaccination capacities over time (which should be as uniform as possible). The strategies achieving the best trade-off between those competing requirements turn out to be the q-days-ahead strategies, which put aside doses to guarantee that stock does not run out over just the next q days. Increasing the look-ahead period, i.e., q, lowers the number of out-of-stock days, though it worsens the other performance indicators. © 2022, Springer Nature Switzerland AG.
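
A highly simplified sketch of a q-days-ahead rule as described, not the authors' code: before administering first doses today, enough stock is set aside to cover the boosters already scheduled over the next q days, pessimistically assuming no new deliveries arrive in that window. All quantities are hypothetical.

```python
def first_doses_today(stock, booster_schedule, day, q, daily_capacity):
    """stock: doses on hand; booster_schedule: {day_index: boosters_due}.
    Returns how many first doses can be started today without risking the
    boosters due on days day..day+q-1 (ignoring future deliveries)."""
    reserved = sum(booster_schedule.get(d, 0) for d in range(day, day + q))
    free = max(0, stock - reserved)
    return min(free, daily_capacity)

# Hypothetical example: 1,000 doses on hand, boosters already due soon.
boosters = {0: 200, 1: 150, 2: 300, 5: 100}
print(first_doses_today(stock=1000, booster_schedule=boosters, day=0, q=3,
                        daily_capacity=400))  # reserves 650 doses, so 350 first doses
```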

15.
Data Science for COVID-19 Volume 1: Computational Perspectives ; : 195-212, 2021.
Article in English | Scopus | ID: covidwho-1787944

ABSTRACT

In this work, we introduce a novel assessment methodology for evaluating a prototype web service-based COVID-19 disease processing system that uses a cluster-based web server; we call it PwCOV. The service generates clinical instructions and processes the respective information for distributed disease data sets, following the business processes and principles of service-oriented computing for each end-user request. The assessment methodology illustrates different aspects of service deployment under massive growth in service users. In this study, PwCOV is observed to be stable up to a stress level of 1,700 simultaneous users, with a response time of 14.35 s, a throughput of 8,592 bytes/s, and a central processing unit (CPU) utilization of 22.16%, together with strong reliability of service execution. However, the reliability of PwCOV execution degrades beyond that limit. For 1,800 simultaneous users, the response time, throughput, and CPU utilization are 25.28 s, 15,729 bytes/s, and 39.13%, respectively; at this stress level a service failure rate of 35% is observed, and reliability falls to a moderate 70% of the service period. The study also discusses the impact of system metrics, reliability, and their correlation on service execution. Statistical analysis is carried out to study the viability, acceptability, and applicability of such a deployment for a COVID-19 disease processing system. The limitation of PwCOV in processing geographically scattered data sets is also discussed. © 2021 Elsevier Inc. All rights reserved.
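
For readers unfamiliar with the reported metrics, a small sketch of how such figures are typically derived from raw load-test samples is shown below; it is illustrative only and unrelated to the PwCOV tooling, with made-up request samples.

```python
def summarize_load_test(samples, window_seconds):
    """samples: list of (latency_seconds, bytes_returned, ok_flag) per request.
    Returns average response time, throughput in bytes/s, and failure rate."""
    n = len(samples)
    failures = sum(1 for _, _, ok in samples if not ok)
    avg_response = sum(lat for lat, _, _ in samples) / n
    throughput = sum(size for _, size, _ in samples) / window_seconds
    return {"avg_response_s": avg_response,
            "throughput_Bps": throughput,
            "failure_rate": failures / n}

# Three hypothetical requests observed over a 1-second window.
print(summarize_load_test([(0.9, 4096, True), (1.4, 4096, True), (2.1, 0, False)], 1.0))
```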

16.
2021 Winter Simulation Conference, WSC 2021 ; 2021-December, 2021.
Article in English | Scopus | ID: covidwho-1746018

ABSTRACT

In most emergency medical services (EMS) systems, patients are transported by ambulance to the closest most appropriate hospital. However, in extreme cases, such as the COVID-19 pandemic, this policy may lead to hospital overloading, which can have detrimental effects on patients. To address this concern, we propose an optimization-based, data-driven hospital load balancing approach. The approach finds a trade-off between short transport times for patients that are not high acuity while avoiding hospital overloading. In order to test the new rule, we build a simulation model, tailored for New York City's EMS system. We use historical EMS incident data from the worst weeks of the pandemic as a model input. Our simulation indicates that 911 patient load balancing is beneficial to hospital occupancy rates and is a reasonable rule for non-critical 911 patient transports. The load balancing rule has been recently implemented in New York City's EMS system. © 2021 IEEE.
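
The paper's optimization model is not given in the abstract; a toy version of such a dispatch rule, with made-up weights and data, is sketched below. Non-critical patients go to the hospital that minimizes transport time plus a penalty that grows with forecast occupancy, while high-acuity patients still go to the closest appropriate hospital.

```python
def choose_hospital(transport_min, occupancy, high_acuity, overload_weight=100.0):
    """transport_min: {hospital: minutes}; occupancy: {hospital: occupied/capacity}.
    High-acuity patients always get the closest hospital; others trade a longer
    ride against relieving a crowded emergency department."""
    if high_acuity:
        return min(transport_min, key=transport_min.get)

    def score(h):
        penalty = overload_weight * max(0.0, occupancy[h] - 0.85)  # only near-full EDs pay
        return transport_min[h] + penalty

    return min(transport_min, key=score)

times = {"Hosp1": 8, "Hosp2": 14, "Hosp3": 20}       # transport minutes (hypothetical)
occ = {"Hosp1": 0.97, "Hosp2": 0.80, "Hosp3": 0.60}
print(choose_hospital(times, occ, high_acuity=True))   # Hosp1 (closest)
print(choose_hospital(times, occ, high_acuity=False))  # Hosp2 (avoids overloaded Hosp1)
```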

17.
Chest ; 161(2): 429-447, 2022 02.
Article in English | MEDLINE | ID: covidwho-1401309

ABSTRACT

BACKGROUND: After the publication of a 2014 consensus statement regarding mass critical care during public health emergencies, much has been learned about surge responses and the care of overwhelming numbers of patients during the COVID-19 pandemic. Gaps in prior pandemic planning were identified and require modification in the midst of severe ongoing surges throughout the world. RESEARCH QUESTION: A subcommittee from The Task Force for Mass Critical Care (TFMCC) investigated the most recent COVID-19 publications coupled with TFMCC members anecdotal experience in order to formulate operational strategies to optimize contingency level care, and prevent crisis care circumstances associated with increased mortality. STUDY DESIGN AND METHODS: TFMCC adopted a modified version of established rapid guideline methodologies from the World Health Organization and the Guidelines International Network-McMaster Guideline Development Checklist. With a consensus development process incorporating expert opinion to define important questions and extract evidence, the TFMCC developed relevant pandemic surge suggestions in a structured manner, incorporating peer-reviewed literature, "gray" evidence from lay media sources, and anecdotal experiential evidence. RESULTS: Ten suggestions were identified regarding staffing, load-balancing, communication, and technology. Staffing models are suggested with resilience strategies to support critical care staff. ICU surge strategies and strain indicators are suggested to enhance ICU prioritization tactics to maintain contingency level care and to avoid crisis triage, with early transfer strategies to further load-balance care. We suggest that intensivists and hospitalists be engaged with the incident command structure to ensure two-way communication, situational awareness, and the use of technology to support critical care delivery and families of patients in ICUs. INTERPRETATION: A subcommittee from the TFMCC offers interim evidence-informed operational strategies to assist hospitals and communities to plan for and respond to surge capacity demands resulting from COVID-19.


Subject(s)
Advisory Committees , COVID-19 , Critical Care , Delivery of Health Care/organization & administration , Surge Capacity , Triage , COVID-19/epidemiology , COVID-19/therapy , Critical Care/methods , Critical Care/organization & administration , Evidence-Based Practice/methods , Evidence-Based Practice/organization & administration , Humans , SARS-CoV-2 , Surge Capacity/organization & administration , Surge Capacity/standards , Triage/methods , Triage/standards , United States/epidemiology
18.
JMIR Mhealth Uhealth ; 8(12): e22098, 2020 12 01.
Article in English | MEDLINE | ID: covidwho-951740

ABSTRACT

We evaluate a Bluetooth-based mobile contact-confirming app, COVID-19 Contact-Confirming Application (COCOA), which is being used in Japan to contain the spread of COVID-19, the disease caused by the novel virus SARS-CoV-2. The app prioritizes the protection of users' privacy from a variety of parties (eg, other users, potential attackers, and public authorities), enhances the capacity to balance the current load of excessive pressure on health care systems (eg, local triage of exposure risk and reduction of in-person hospital visits), increases the speed of responses to the pandemic (eg, automated recording of close contact based on proximity), and reduces operation errors and population mobility. The peer-to-peer framework of COCOA is intended to provide the public with dynamic and credible updates on the COVID-19 pandemic without sacrificing the privacy of their information. However, caution must be exercised to address critical concerns, such as the rate of participation and delays in data sharing. The results of a simulation imply that the participation rate in Japan needs to be close to 90% to effectively control the spread of COVID-19.


Subject(s)
COVID-19/prevention & control , COVID-19/transmission , Contact Tracing/methods , Mobile Applications/standards , Public Health Surveillance/methods , Humans , Japan , Pandemics/prevention & control